
    Charting the Constellation of Science Reform

    Over the past decade, a sense of urgency has been building in the scientific community. Researchers have discovered that much of the body of literature is unreliable and possibly invalid thanks to weak theory, flawed methods, and shoddy statistics. This is driven by a widespread competitive, secretive approach to research, which, in turn, is fueled by toxic academic incentive structures. Many in the community have decided to address these issues, coming together in what has become known as the ‘scientific reform movement’. While these ‘reformers’ are often spoken of as a single, homogeneous entity, my findings underscore the heterogeneity of the reform community. In my dissertation, I explore the scientific reform community using ethnography and social network analysis tools, primarily studying their online Twitter engagements to understand their culture, practices, and structure. With Wenger’s Community of Practice theory as an interpretive framework, I analyze scientific reform discourse playing out between reformers on Twitter. Using quantitative Twitter friend/follow data, I investigate which reform members engage online, using following behavior to understand aspects of their social structure. I link the quantitative exploration with my qualitative analysis to conclude that while the reformers are united by their interest in improving science, they are better characterized as a constellation of small communities of practice, each with its own norms, priorities, and unique approach to the group enterprise of scientific reform. My investigation is also an exercise in reflexivity, as I have studied a community in which I am an active participant.
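    As a minimal sketch of how community detection on friend/follow data might look, the following Python snippet clusters a toy follow network with networkx. The edge list and the choice to keep only reciprocated follows are illustrative assumptions, not the dissertation's actual pipeline.

        # Minimal sketch: community detection on a toy Twitter follow network.
        # The edge list and reciprocated-follow filtering are assumptions for
        # illustration; the dissertation's real data pipeline is not described here.
        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities

        # Hypothetical follow edges: (follower, followed)
        follow_edges = [
            ("alice", "bob"), ("bob", "alice"), ("carol", "alice"),
            ("dave", "erin"), ("erin", "dave"), ("carol", "dave"),
        ]
        G = nx.DiGraph(follow_edges)

        # Mutual follows ("friends") often signal stronger ties; keep only
        # reciprocated edges before clustering.
        mutual = [(u, v) for u, v in G.edges() if G.has_edge(v, u)]
        H = nx.Graph(mutual)

        # Modularity-based clustering as one way to surface small communities.
        for i, community in enumerate(greedy_modularity_communities(H)):
            print(f"community {i}: {sorted(community)}")

    On these toy edges the procedure surfaces two small clusters, a miniature analogue of the 'constellation of small communities' the dissertation describes.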

    Creative destruction in science

    Drawing on the concept of a gale of creative destruction in a capitalistic economy, we argue that initiatives to assess the robustness of findings in the organizational literature should aim to simultaneously test competing ideas operating in the same theoretical space. In other words, replication efforts should seek not just to support or question the original findings, but also to replace them with revised, stronger theories with greater explanatory power. Achieving this will typically require adding new measures, conditions, and subject populations to research designs, in order to carry out conceptual tests of multiple theories in addition to directly replicating the original findings. To illustrate the value of the creative destruction approach for theory pruning in organizational scholarship, we describe recent replication initiatives re-examining culture and work morality, working parents’ reasoning about day care options, and gender discrimination in hiring decisions. Significance statement: It is becoming increasingly clear that many, if not most, published research findings across scientific fields are not readily replicable when the same method is repeated. Although extremely valuable, failed replications risk leaving a theoretical void, reducing confidence that the original theoretical prediction is true but not replacing it with positive evidence in favor of an alternative theory. We introduce the creative destruction approach to replication, which combines theory pruning methods from the field of management with emerging best practices from the open science movement, with the aim of making replications as generative as possible. In effect, we advocate for a Replication 2.0 movement in which the goal shifts from checking the reliability of past findings to actively engaging in competitive theory testing and theory building. Scientific transparency statement: The materials, code, and data for this article are posted publicly on the Open Science Framework, with links provided in the article.

    The Tone Debate: Knowledge, Self, and Social Order

    In the replication crisis in psychology, a “tone debate” has developed. It concerns the question of how to conduct scientific debate effectively and ethically: how should scientists offer criticism without unnecessarily damaging relations? The increasing use of Facebook and Twitter by researchers has made this issue especially pressing, as these social technologies have greatly expanded the possibilities for conversation between academics while offering little formal control over the debate. In this article, we show that psychologists have tried to solve this issue with various codes of conduct, with an appeal to virtues such as humility, and with practices of self-transformation. We also show that the polemical style of debate, popular in many scientific communities, is itself being questioned by psychologists. Following Shapin and Schaffer’s analysis of the ethics of Robert Boyle’s experimental philosophy in the 17th century, we trace the connections between knowledge, social order, and subjectivity as they are debated and revised by present-day psychologists.

    Experimenter as automaton; experimenter as human: Exploring the position of the researcher in scientific research

    The crisis of confidence in the social sciences has many corollaries that impact our research practices. One of these is a push towards maximal and mechanical objectivity in quantitative research. This stance is reinforced by major journals and academic institutions that subtly yet certainly link objectivity with integrity and rigor. The converse implication is an association between subjectivity and low quality. Subjectivity, however, is one of qualitative methodology's best assets. In qualitative methodology, that subjectivity is often given voice through reflexivity: it is used to better understand our own role within the research process, and it is a means through which researchers may monitor how they influence their research. Given that the actions of researchers have led to the poor reproducibility characterising the crisis of confidence, it is worthwhile to consider whether reflexivity can help improve the validity of research findings in quantitative psychology. In this report, we describe a combined approach: data from a series of interviews help us elucidate the link between reflexive practice and research quality, through the eyes of practicing academics. Through our exploration of the position of the researcher in their research, we shed light on how the reflections of the researcher can impact the quality of their research findings, in the context of the current crisis of confidence. The validity of these findings is tempered, however, by limitations of the sample, and we advise caution on the part of our audience in reading our conclusions.

    When and Why to Replicate: As Easy as 1, 2, 3?

    The crisis of confidence in psychology has prompted vigorous and persistent debate in the scientific community concerning the veracity of the findings of psychological experiments. This discussion has led to changes in psychology's approach to research, and several new initiatives have been developed, many with the aim of improving our findings. One key advancement is the marked increase in the number of replication studies conducted. We argue that while it is important to conduct replications as part of regular research protocol, it is neither efficient nor useful to replicate results at random. We recommend adopting a methodical approach toward the selection of replication targets to maximize the impact of the outcomes of those replications, and minimize waste of scarce resources. In the current study, we demonstrate how a Bayesian re-analysis of existing research findings followed by a simple qualitative assessment process can drive the selection of the best candidate article for replication.
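    A minimal sketch of the kind of Bayesian re-analysis described above, in Python with scipy: it recomputes a default Bayes factor (JZS prior; Rouder et al., 2009) from a published one-sample t statistic. The t value and sample size here are hypothetical, and the study's actual re-analysis pipeline may differ.

        # Minimal sketch: default Bayesian t-test (JZS prior) recomputed
        # from a reported t statistic. The inputs are made up for illustration.
        import numpy as np
        from scipy.integrate import quad

        def jzs_bf10(t, n):
            """BF10 for a one-sample t-test under the default JZS prior."""
            v = n - 1  # degrees of freedom

            def integrand(g):
                # Marginal likelihood under H1, mixing over the g-prior.
                return ((1 + n * g) ** -0.5
                        * (1 + t**2 / ((1 + n * g) * v)) ** (-(v + 1) / 2)
                        * (2 * np.pi) ** -0.5 * g ** -1.5
                        * np.exp(-1 / (2 * g)))

            marginal_h1, _ = quad(integrand, 0, np.inf)
            likelihood_h0 = (1 + t**2 / v) ** (-(v + 1) / 2)
            return marginal_h1 / likelihood_h0

        # Hypothetical published result: t(49) = 2.1, N = 50.
        print(f"BF10 = {jzs_bf10(t=2.1, n=50):.2f}")

    Articles whose re-analysis yields only ambiguous evidence (Bayes factors near 1) could then, for instance, be flagged for the subsequent qualitative assessment step.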

    Rethinking Remdesivir for COVID-19: A Bayesian Reanalysis of Trial Findings

    Background: Following testing in clinical trials, remdesivir has been authorized for the treatment of COVID-19 in parts of the world, including the USA and Europe. Early authorizations were largely based on results from two clinical trials. A third study, published by Wang et al., was underpowered and deemed inconclusive. Although regulators have shown an interest in interpreting the Wang et al. study, under a frequentist framework it is difficult to determine whether the non-significant finding was caused by a lack of power or by the absence of an effect. Bayesian hypothesis testing does allow for quantification of evidence in favor of the absence of an effect. Findings: Results of our Bayesian reanalysis of the three trials show ambiguous evidence for the primary outcome of clinical improvement and moderate evidence against the secondary outcome of a decreased mortality rate. Additional analyses of three studies published after initial marketing approval support these findings. Conclusions: We recommend that regulatory bodies take all available evidence into account for endorsement decisions. A Bayesian approach can be beneficial, in particular in the case of statistically non-significant results. This is especially pressing when limited clinical efficacy data are available.
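    As an illustration of how a reanalysis can quantify evidence against an effect, here is a minimal Python/scipy sketch of a Savage-Dickey Bayes factor on the log odds ratio of mortality, under a normal approximation. The trial counts and prior width are made-up assumptions, not the published remdesivir data or the paper's exact model.

        # Minimal sketch: Savage-Dickey Bayes factor for a log odds ratio,
        # normal approximation. Counts below are hypothetical.
        import numpy as np
        from scipy.stats import norm

        d_t, n_t = 30, 500   # treatment arm: deaths / total (hypothetical)
        d_c, n_c = 35, 500   # control arm: deaths / total (hypothetical)

        # Observed log odds ratio and its approximate standard error.
        log_or = np.log((d_t / (n_t - d_t)) / (d_c / (n_c - d_c)))
        se = np.sqrt(1/d_t + 1/(n_t - d_t) + 1/d_c + 1/(n_c - d_c))

        # Normal prior on the log odds ratio under H1 (width is an assumption).
        prior_sd = 1.0

        # Conjugate normal-normal update (likelihood approximated as normal).
        post_var = 1 / (1 / prior_sd**2 + 1 / se**2)
        post_mean = post_var * (log_or / se**2)

        # Savage-Dickey: BF01 = posterior density at 0 / prior density at 0.
        bf01 = norm.pdf(0, post_mean, np.sqrt(post_var)) / norm.pdf(0, 0, prior_sd)
        print(f"BF01 = {bf01:.2f}  (evidence for no mortality effect)")

    With these toy counts the resulting BF01 of roughly 3 would count as moderate evidence for the null, the kind of statement the abstract makes about the mortality outcome.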

    The Effect of Preregistration on Trust in Empirical Research Findings: Results of a Registered Report

    The crisis of confidence has undermined the trust that researchers place in the findings of their peers. To increase trust in research, initiatives such as preregistration have been suggested, which aim to prevent various questionable research practices. As it stands, however, no empirical evidence exists that preregistration does increase perceptions of trust. The picture may be complicated by a researcher's familiarity with the author of the study, regardless of the preregistration status of the research. This registered report presents an empirical assessment of the extent to which preregistration increases the trust of 209 active academics in the reported outcomes, and of how familiarity with another researcher influences that trust. Contrary to our expectations, we obtained ambiguous Bayes factors and conclude that we do not have strong evidence with which to answer our research questions. Our findings are presented along with evidence that our manipulations were ineffective for many participants, leading to the exclusion of 68% of complete datasets and, as a consequence, an underpowered design. We discuss other limitations and confounds which may explain why the findings of the study deviate from a previously conducted pilot study, and we reflect on the benefits of using the registered report submission format in light of our results. The OSF page for this registered report and its pilot can be found here: http://dx.doi.org/10.17605/OSF.IO/B3K75

    Two Bayesian Tests of the GLOMOsys Model

    Priming is arguably one of the key phenomena in contemporary social psychology. Recent retractions and failed replication attempts have led to a division in the field between proponents and skeptics, and have reinforced the importance of confirming certain priming effects through replication. In this study, we describe the results of two preregistered replication attempts of an experiment by Förster and Denzler (2012). In both experiments, participants first processed letters either globally or locally, then were tested using a typicality rating task. Bayes factor hypothesis tests were conducted for both experiments: Experiment 1 (N = 100) yielded an indecisive Bayes factor of 1.38, indicating that the in-lab data are 1.38 times more likely to have occurred under the null hypothesis than under the alternative. Experiment 2 (N = 908) yielded a Bayes factor of 10.84, indicating strong support for the null hypothesis that global priming does not affect participants' mean typicality ratings. The failure to replicate this priming effect challenges existing support for the GLOMOsys model.
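    The reported Bayes factors quantify relative evidence, not the probability that the null is true; converting one into the other requires prior odds. A minimal Python sketch, assuming 1:1 prior odds (an assumption; the abstract reports only the Bayes factors):

        # Minimal sketch: converting the reported BF01 values into posterior
        # model probabilities. The 1:1 prior odds are an assumption.
        def posterior_prob_h0(bf01, prior_odds_h0=1.0):
            post_odds = bf01 * prior_odds_h0
            return post_odds / (1 + post_odds)

        for label, bf01 in [("Experiment 1", 1.38), ("Experiment 2", 10.84)]:
            print(f"{label}: BF01 = {bf01:5.2f} -> "
                  f"P(H0 | data) = {posterior_prob_h0(bf01):.2f}")

    Under that assumption, Experiment 1 leaves the null only slightly favored (about 0.58), while Experiment 2 pushes its posterior probability above 0.9.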

    The Process of Replication Target Selection in Psychology: What to Consider?

    Increased execution of replication studies contributes to the effort to restore the credibility of empirical research. However, a second generation of problems arises: the number of potential replication targets far outstrips available resources. Given limited resources, replication target selection should be well justified, systematic, and transparently communicated. At present, the discussion of what to consider when selecting a replication target is limited to theoretical discussion, self-reported justifications, and a few formalized suggestions. In this Registered Report, we proposed a study involving the scientific community to create a list of considerations for consultation when selecting a replication target in psychology. We employed a modified Delphi approach. First, we constructed a preliminary list of considerations. Second, we surveyed psychologists who had previously selected a replication target with regard to their considerations. Third, we incorporated the results into the preliminary list of considerations and sent the updated list to a group of individuals knowledgeable about concerns regarding replication target selection. Over the course of several rounds, we established consensus regarding what to consider when selecting a replication target.
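    A minimal Python sketch of the consensus bookkeeping a modified Delphi round involves; the rating scale, panel data, and 75% agreement threshold are illustrative assumptions rather than the criteria used in this Registered Report.

        # Minimal sketch: per-item consensus check in one Delphi round.
        # Ratings, scale, and threshold are hypothetical.
        from statistics import mean

        # Hypothetical panel ratings (1-5 agreement scale) for candidate
        # considerations when selecting a replication target.
        round_ratings = {
            "theoretical importance": [5, 4, 5, 5, 4, 5],
            "citation impact":        [3, 4, 2, 5, 3, 2],
            "feasibility":            [4, 4, 5, 4, 5, 4],
        }

        CONSENSUS = 0.75  # share of panelists rating an item 4 or 5

        for item, ratings in round_ratings.items():
            share = sum(r >= 4 for r in ratings) / len(ratings)
            status = "consensus" if share >= CONSENSUS else "next round"
            print(f"{item:24s} mean={mean(ratings):.1f} "
                  f"agree={share:.0%} -> {status}")

    Items that miss the threshold would be revised and recirculated in the next round, which is how consensus accumulates over the several rounds the abstract describes.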

    Perspectives on Scientific Error

    Theoretical arguments and empirical investigations indicate that a high proportion of published findings do not replicate and are likely false. The current position paper provides a broad perspective on the scientific errors that may lead to replication failures, focusing on the history of reform and on opportunities for future reform. We organize our perspective along four main themes: institutional reform, methodological reform, statistical reform, and publishing reform. For each theme, we illustrate potential errors by narrating the story of a fictional researcher during the research cycle, and we discuss future opportunities for reform. The resulting agenda provides a resource for ushering in an era marked by a research culture that is less error-prone and a scientific publication landscape with fewer spurious findings.